
    Getting Close Without Touching: Near-Gathering for Autonomous Mobile Robots

    In this paper we study the Near-Gathering problem for a finite set of dimensionless, deterministic, asynchronous, anonymous, oblivious and autonomous mobile robots with limited visibility moving in the Euclidean plane in Look-Compute-Move (LCM) cycles. In this problem, the robots have to get close enough to each other, so that every robot can see all the others, without touching (i.e., colliding with) any other robot. The importance of solving the Near-Gathering problem is that it makes it possible to overcome the restriction of having robots with limited visibility, and hence to exploit all the studies (the majority, actually) done on this topic in the unlimited visibility setting. Indeed, after the robots get close enough to each other, they are able to see all the robots in the system, a scenario that is similar to the one where the robots have unlimited visibility. We present the first (deterministic) algorithm for the Near-Gathering problem, to the best of our knowledge, which allows a set of autonomous mobile robots to nearly gather within finite time without ever colliding. Our algorithm assumes some reasonable conditions on the input configuration (the Near-Gathering problem is easily seen to be unsolvable in general). Further, all the robots are assumed to have a compass (hence they agree on the "North" direction), but they do not necessarily have the same handedness (hence they may disagree on the clockwise direction). We also show how the robots can detect termination, i.e., detect when the Near-Gathering problem has been solved. This is crucial when the robots have to perform a generic task after having nearly gathered. We show that termination detection can be obtained even if the total number of robots is unknown to the robots themselves (i.e., it is not a parameter of the algorithm), and robots have no way to explicitly communicate. Comment: 25 pages, 8 figures.
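
    To make the LCM setting concrete, here is a minimal Python sketch of one Look-Compute-Move cycle under limited visibility. The visibility radius, the synchronous scheduling, and the toy "move halfway toward the centroid" rule are all illustrative assumptions; the paper's actual Compute rule is designed to avoid collisions even under asynchrony and disagreement on handedness.

    ```python
    import math

    VISIBILITY = 1.0  # limited visibility radius (assumed value)

    def look(me, robots):
        """Look: snapshot of the robots visible from position `me`."""
        return [p for p in robots if p != me and math.dist(me, p) <= VISIBILITY]

    def compute(me, visible):
        """Compute: toy rule, move halfway toward the centroid of the
        visible robots (illustrative only, NOT the paper's rule)."""
        if not visible:
            return me
        cx = sum(p[0] for p in visible) / len(visible)
        cy = sum(p[1] for p in visible) / len(visible)
        return ((me[0] + cx) / 2, (me[1] + cy) / 2)

    def lcm_cycle(robots):
        """Move: one synchronous round of Look-Compute-Move for all robots."""
        return [compute(r, look(r, robots)) for r in robots]
    ```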

    Analyzing and Comparing On-Line News Sources via (Two-Layer) Incremental Clustering

    In this paper, we analyse the contents of the web sites of two Italian press agencies and of four of the most popular Italian newspapers, in order to answer questions such as: what are the most relevant news items, what is the average life of a news item, and how different are the various sites? To this aim, we have developed a web-based application which hourly collects the articles in the main column of the six web sites, implements an incremental clustering algorithm for grouping the articles into news items, and finally allows the user to see the answers to the above questions. We have also designed and implemented a two-layer modification of the incremental clustering algorithm and performed a preliminary experimental evaluation of this modification: it turns out that the two-layer clustering is extremely efficient in terms of time performance, and it achieves quite good precision and recall.
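
    As an illustration of the single-layer step, the following sketch assigns each incoming article to the most similar existing news cluster, or opens a new one. The Jaccard word-overlap similarity and the threshold value are placeholders, not the measures actually used by the application.

    ```python
    def jaccard(a, b):
        """Jaccard similarity between two word sets (placeholder measure)."""
        return len(a & b) / len(a | b) if a | b else 0.0

    def incremental_cluster(articles, threshold=0.3):
        """Greedy incremental clustering: each article joins the most
        similar cluster, or starts a new one if no similarity exceeds
        the threshold."""
        clusters = []
        for text in articles:
            words = set(text.lower().split())
            best, best_sim = None, threshold
            for c in clusters:
                sim = jaccard(words, c["profile"])
                if sim > best_sim:
                    best, best_sim = c, sim
            if best is None:
                clusters.append({"profile": set(words), "articles": [text]})
            else:
                best["articles"].append(text)
                best["profile"] |= words  # grow the cluster's word profile
        return clusters
    ```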

    Compact DSOP and partial DSOP Forms

    Given a Boolean function f on n variables, a Disjoint Sum-of-Products (DSOP) of f is a set of products (ANDs) of subsets of literals whose sum (OR) equals f, such that no two products cover the same minterm of f. DSOP forms are a special instance of partial DSOPs, i.e. the general case where a subset of minterms must be covered exactly once and the other minterms (typically corresponding to don't care conditions of f) can be covered any number of times. We discuss finding DSOPs and partial DSOPs with a minimal number of products, a problem theoretically connected with various properties of Boolean functions and practically relevant in the synthesis of digital circuits. Finding an absolute minimum is hard; in fact, we prove that the problem of absolute minimization of partial DSOPs is NP-hard. Therefore it is crucial to devise a polynomial time heuristic that compares favorably with the known minimization tools. To this end we develop a further piece of theory, starting from the definition of the weight of a product p as a function of the number of fragments induced on other cubes by the selection of p, and show how product weights can be exploited for building a class of minimization heuristics for DSOP and partial DSOP synthesis. A set of experiments conducted on major benchmark functions shows that our method, with a family of variants, always generates better results than previous heuristics, including the method based on a BDD representation of f.
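
    The disjointness requirement can be stated operationally. In the usual cube notation, a product over n variables is a string over '0', '1', '-' (negative literal, positive literal, variable absent); two cubes share a minterm iff no position holds complementary literals. Here is a small sketch of the validity check only, not of the paper's weight-based heuristic:

    ```python
    def cubes_intersect(p, q):
        """Two cubes share a minterm iff no position has one '0' and one '1'."""
        return all(a == b or a == '-' or b == '-' for a, b in zip(p, q))

    def is_disjoint_cover(products):
        """Defining property of a DSOP: the products are pairwise disjoint."""
        return all(not cubes_intersect(products[i], products[j])
                   for i in range(len(products))
                   for j in range(i + 1, len(products)))

    print(is_disjoint_cover(["01-", "1-1"]))  # True: the cubes clash on x1
    print(is_disjoint_cover(["01-", "-11"]))  # False: minterm 011 covered twice
    ```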

    Linear Time Distributed Swap Edge Algorithms

    In this paper, we consider the all best swap edges problem in a distributed environment. We are given a 2-edge-connected, positively weighted network X, where all communication is routed through a rooted spanning tree T of X. If one tree edge e = {x, y} fails, the communication network will be disconnected. However, since X is 2-edge-connected, communication can be restored by replacing e by a non-tree edge e′, called a swap edge of e, whose ends lie in different components of T − e. Of all possible swap edges of e, we would like to choose the best, as defined by the application. The all best swap edges problem is to identify the best swap edge for every tree edge, so that in case of any edge failure, the best swap edge can be activated quickly. There are solutions to this problem for a number of cases in the literature. A major concern for all these solutions is to minimize the number of messages. However, especially in environments subject to transient faults, time is a crucial factor. In this paper we present a novel technique that addresses this problem from a time perspective; in fact, we present a distributed solution that works in linear time with respect to the height h of T for a number of different criteria, while retaining the optimal number of messages. To the best of our knowledge, all previous solutions solve the problem in O(h^2) time in the cases we consider.
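
    For intuition about the problem itself (not about the distributed algorithm), here is a naive sequential sketch: for every tree edge it scans all non-tree edges crossing the induced cut and keeps the best one, here simply the one of minimum weight; other criteria plug in through the same `key` function. All names are assumptions, and the sketch runs in quadratic time, unlike the linear-time distributed solution of the paper.

    ```python
    def component_without(tree_adj, start, removed):
        """Vertices reachable from `start` in the tree T minus edge `removed`."""
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in tree_adj[u]:
                if {u, v} == set(removed) or v in seen:
                    continue
                seen.add(v)
                stack.append(v)
        return seen

    def all_best_swap_edges(tree_edges, non_tree_edges, weight):
        """For each tree edge e, the minimum-weight non-tree edge whose
        endpoints lie in different components of T - e."""
        tree_adj = {}
        for x, y in tree_edges:
            tree_adj.setdefault(x, []).append(y)
            tree_adj.setdefault(y, []).append(x)
        best = {}
        for e in tree_edges:
            side = component_without(tree_adj, e[0], e)
            crossing = [f for f in non_tree_edges
                        if (f[0] in side) != (f[1] in side)]
            best[e] = min(crossing, key=weight, default=None)
        return best
    ```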

    Testing and reconfiguration of VLSI linear arrays

    Achieving fault tolerance through the incorporation of redundancy and reconfiguration is quite common. In this paper we study the fault tolerance of linear arrays of N processors with k bypass links whose maximum length is g. We consider arrays with both bidirectional and unidirectional links. We first consider the problem of testing whether a set of n faulty processors is catastrophic, i.e., precludes reconfiguration. We provide new testing algorithms which improve and generalize known testing algorithms. For bidirectional arrays we provide an O(kn) time testing algorithm, and for unidirectional arrays we provide an O(n) time algorithm for the case k = 1 and an O(kn log k) time algorithm for the case k > 1. When the fault pattern is not catastrophic, we study the problem of finding an optimal reconfiguration of the array. We consider optimality with respect to two parameters: the size of the reconfigured array and the number of redundant links to activate. Considering optimality with respect to the size of the reconfigured array, we prove that the problem is NP-hard in the strong sense if the bypass links are bidirectional, while it can be solved in O(kng) time if the bypass links are unidirectional. Considering optimality with respect to the number of bypass links to activate, we prove that the problem can be solved in O(kn) time if the bypass links are bidirectional, and in O(kng) time if the bypass links are unidirectional.
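
    To illustrate what "catastrophic" means, the sketch below decides by a single scan whether a fault pattern disconnects the array's input from its output in a unidirectional array with k = 1 and bypass links of length g. The I/O-node model and link layout are assumptions made for the sketch; the paper's testing algorithms are more refined and their running times depend on the number of faults n rather than on N.

    ```python
    def is_catastrophic(faulty, n_procs, g):
        """A fault pattern is catastrophic iff no fault-free path leads from
        the input (node 0) to the output (node n_procs + 1) using regular
        links i -> i+1 and bypass links i -> i+g."""
        faulty = set(faulty)
        ok = [False] * (n_procs + 2)  # nodes 0 and n_procs+1 are fault-free I/O
        ok[0] = True
        for v in range(1, n_procs + 2):
            if v in faulty:
                continue
            ok[v] = ok[v - 1] or (v - g >= 0 and ok[v - g])
        return not ok[n_procs + 1]

    print(is_catastrophic({3, 4, 5, 6}, 10, 4))  # True: g consecutive faults
    print(is_catastrophic({3, 5, 7}, 10, 4))     # False: bypasses recover
    ```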

    More agents may decrease global work: A case in butterfly decontamination.

    This paper is a contribution to network decontamination, with a viewpoint inherited from parallel processing. At the beginning, some or all of the vertices may be contaminated. The network is visited by a group of decontaminating agents; when a decontaminated vertex is left by the agents, it can be re-contaminated if the number of infected neighbors exceeds a certain immunity threshold m. The main goal of the studies in this line of research is to minimize the number of agents A needed to do the job and, for a minimum team, to minimize the number M of agent moves. Instead of M, we consider the number T of steps (i.e., parallel moves) as a measure of time, and evaluate the quality of a protocol on the basis of its work W = AT. Taking butterfly networks as an example, we compare different protocols and show that, for some values of m, a larger team of agents may require less work.
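
    The quantities in play can be made concrete with a small sketch. The recontamination rule follows the abstract (a clean, unguarded vertex is re-contaminated once its infected neighbors exceed the threshold m); the graph representation and function names are assumptions.

    ```python
    def recontaminate(clean, adj, guarded, m):
        """Propagate recontamination to a fixpoint: a clean, unguarded vertex
        becomes contaminated again if its contaminated neighbors exceed m."""
        while True:
            newly = {v for v in clean
                     if v not in guarded
                     and sum(1 for u in adj[v] if u not in clean) > m}
            if not newly:
                return clean
            clean = clean - newly

    def work(num_agents, schedule):
        """Work of a protocol: W = A * T, with T the number of parallel steps
        (each entry of `schedule` is the set of agent positions at one step)."""
        return num_agents * len(schedule)
    ```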